Universal Sorting: Finding a DAG using Priced Comparisons
We resolve two open problems in sorting with priced information, introduced
by [Charikar, Fagin, Guruswami, Kleinberg, Raghavan, Sahai (CFGKRS), STOC
2000]. In this setting, different comparisons have different (potentially
infinite) costs. The goal is to sort with small competitive ratio (algorithmic
cost divided by cheapest proof).
1) When all costs are in {0, 1, n, ∞}, we give an algorithm that achieves an Õ(n^{3/4})
competitive ratio. Our algorithm generalizes the
algorithms for generalized sorting (all costs are either 1 or ∞), a
version initiated by [Huang, Kannan, Khanna, FOCS 2011] and addressed recently
by [Kuszmaul, Narayanan, FOCS 2021].
2) We answer the problem of bichromatic sorting posed by [CFGKRS]: The input
is split into two sets A and B, and A-A and B-B comparisons are more expensive
than an A-B comparison. We give a randomized algorithm with an O(polylog n)
competitive ratio.
These results are obtained by introducing the universal sorting problem,
which generalizes the existing framework in two important ways. We remove the
promise of a directed Hamiltonian path in the DAG of all comparisons. Instead,
we require that an algorithm outputs the transitive reduction of the DAG. For
bichromatic sorting, when A-A and B-B comparisons cost ∞, this
generalizes the well-known nuts and bolts problem. We initiate an
instance-based study of the universal sorting problem. Our definition of
instance-optimality is inherently more algorithmic than that of the competitive
ratio in that we compare the cost of a candidate algorithm to the cost of the
optimal instance-aware algorithm. This unifies existing lower bounds, and opens
up the possibility of an instance-optimal algorithm for the bichromatic
version.
Comment: 40 pages, 5 figures
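The transitive-reduction output required above can be made concrete with a small in-memory sketch. This is not the paper's priced-comparison algorithm, only an illustration of what the required output looks like; the function name and the brute-force DFS check are illustrative assumptions:

```python
def transitive_reduction(adj):
    """Remove every edge (u, v) for which another u -> v path exists.

    adj: dict mapping node -> set of direct successors (must be a DAG).
    Returns a new adjacency dict containing only the irredundant edges.
    """
    def reachable(src, dst, skip_edge):
        # DFS from src to dst, ignoring the single edge `skip_edge`.
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if (u, v) == skip_edge:
                    continue
                if v == dst:
                    return True
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    return {u: {v for v in vs if not reachable(u, v, (u, v))}
            for u, vs in adj.items()}

# Example: 1 -> 2 -> 3 plus the redundant shortcut 1 -> 3.
dag = {1: {2, 3}, 2: {3}, 3: set()}
print(transitive_reduction(dag))  # {1: {2}, 2: {3}, 3: set()}
```

The shortcut edge 1 -> 3 is dropped because the path 1 -> 2 -> 3 already certifies that order, which is exactly the information a universal sorting algorithm must output when no Hamiltonian path is promised.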
An Algorithm for Bichromatic Sorting with Polylog Competitive Ratio
The problem of sorting with priced information was introduced by [Charikar,
Fagin, Guruswami, Kleinberg, Raghavan, Sahai (CFGKRS), STOC 2000]. In this
setting, different comparisons have different (potentially infinite) costs. The
goal is to find a sorting algorithm with small competitive ratio, defined as
the (worst-case) ratio of the algorithm's cost to the cost of the cheapest
proof of the sorted order.
The simple case of bichromatic sorting posed by [CFGKRS] remains open: We are
given two sets A and B of total size n, and the cost of an A-A
comparison or a B-B comparison is higher than that of an A-B comparison. The goal
is to sort A ∪ B. An Ω(log n) lower bound on the competitive ratio
follows from unit-cost sorting. Note that this is a generalization of the
famous nuts and bolts problem, where A-A and B-B comparisons have infinite
cost, and elements of A and B are guaranteed to alternate in the final
sorted order.
In this paper we give a randomized algorithm, InversionSort, with an
almost-optimal w.h.p. competitive ratio of O(log^3 n). This is the first
algorithm for bichromatic sorting with a polylog(n) competitive ratio.
Comment: 18 pages, accepted to ITCS 2024. arXiv admin note: text overlap with
arXiv:2211.0460
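Since the nuts and bolts problem anchors both abstracts above, a toy sketch may help: the classical randomized strategy matches and sorts using only nut-bolt (A-B) comparisons, never the infinite-cost A-A or B-B ones. This is the folklore quicksort-style algorithm, not InversionSort; modeling nuts and bolts as numbers and the function name are illustrative assumptions:

```python
import random

def match_nuts_and_bolts(nuts, bolts):
    """Pair each nut with its equal-size bolt using only nut-bolt
    comparisons; nut-nut and bolt-bolt comparisons are never made.
    Toy model: sizes are numbers, a comparison is <, >, or ==.
    Returns the matched (nut, bolt) pairs in sorted order.
    """
    if not nuts:
        return []
    pivot_nut = random.choice(nuts)
    # Partition the bolts with the pivot nut and find its matching bolt.
    small_bolts = [b for b in bolts if b < pivot_nut]
    large_bolts = [b for b in bolts if b > pivot_nut]
    pivot_bolt = next(b for b in bolts if b == pivot_nut)
    # Partition the remaining nuts with the matched bolt.
    small_nuts = [n for n in nuts if n < pivot_bolt]
    large_nuts = [n for n in nuts if n > pivot_bolt]
    return (match_nuts_and_bolts(small_nuts, small_bolts)
            + [(pivot_nut, pivot_bolt)]
            + match_nuts_and_bolts(large_nuts, large_bolts))

print(match_nuts_and_bolts([3, 1, 4, 2], [2, 4, 1, 3]))
# [(1, 1), (2, 2), (3, 3), (4, 4)]
```

The expected number of A-B comparisons is O(n log n), while the cheapest proof of the sorted order also uses Θ(n) such comparisons; bichromatic sorting asks how well this trade-off can be managed when same-set comparisons are expensive but finite.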
External memory priority queues with decrease-key and applications to graph algorithms
We present priority queues in the external memory model with block size B and main memory size M that support, on N elements, the operation Update (a combination of Insert and DecreaseKey) in O((1/B) log_{M/B}(N/B)) amortized I/Os and the operations ExtractMin and Delete in O(ceil[(M^epsilon / B) log_{M/B}(N/B)] log_{M/B}(N/B)) amortized I/Os, for any real epsilon in (0,1), using O((N/B) log_{M/B}(N/B)) blocks. Previous I/O-efficient priority queues either support these operations in O((1/B) log_2(N/B)) amortized I/Os [Kumar and Schwabe, SPDP '96] or support only the operations Insert, Delete and ExtractMin in optimal O((1/B) log_{M/B}(N/B)) amortized I/Os, however without supporting DecreaseKey [Fadel et al., TCS '99].
We also present buffered repository trees that support, on a multi-set of N elements, the operation Insert in O((1/B) log_{M/B}(N/B)) I/Os and the operation Extract on K extracted elements in O(M^epsilon log_{M/B}(N/B) + K/B) amortized I/Os, using O(N/B) blocks. Previous results achieve O((1/B) log_2(N/B)) I/Os and O(log_2(N/B) + K/B) I/Os, respectively [Buchsbaum et al., SODA '00].
Our results imply improved O((E/B) log_{M/B}(E/B)) I/Os for single-source shortest paths, depth-first search and breadth-first search algorithms on massive directed dense graphs G = (V, E) with E = Omega(V^{1+epsilon}), epsilon > 0, and V = Omega(M), which matches the I/O-optimal bound for sorting E values in external memory.
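The Update/ExtractMin interface above mirrors the decrease-key pattern that drives Dijkstra's single-source shortest paths algorithm. An in-memory analogue using a binary heap with lazy deletion (standing in for the external-memory structure, which this sketch does not implement) might look like:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths with a lazy-deletion binary heap:
    instead of decreasing a key in place, Update pushes a fresh
    (distance, node) entry; stale entries are skipped at extraction.
    adj: dict mapping node -> list of (neighbor, edge_weight).
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)            # ExtractMin
        if d > dist.get(u, float("inf")):
            continue                          # stale entry, skip it
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))  # Update (Insert/DecreaseKey)
    return dist

graph = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(graph, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```

In external memory the same pattern pays one priority-queue operation per edge, so a queue supporting Update in O((1/B) log_{M/B}(N/B)) amortized I/Os is exactly what yields the sorting-bound I/O cost for dense graphs quoted above.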